What the Firing and Rehiring of Sam Altman Actually Means

Slate

Folks, if you predicted on Friday that the closely watched OpenAI power struggle would end in the most pointless-seeming way possible … well, just look. Late Tuesday night, four days after CEO Sam Altman's shocking ouster from the A.I. company, we found ourselves (mostly) back where we started: Altman is returning to OpenAI as its CEO, albeit not to its board of directors; Greg Brockman is once again president of OpenAI, but also will not be a member of the board; Mira Murati, who briefly took the helm as interim CEO, is just regular ol' CTO again; the three researchers who'd stepped down Friday in solidarity with Altman and Brockman are either back at the company or requesting to return; and Altman & co. will once again operate with the backing of Microsoft, though not as direct employees of the Big Tech pioneer. When it comes to the Main Characters of this saga and their loyalists, it seems most everyone's pretty happy. "[W]e are so back," Brockman exclaimed, sharing a selfie with his smiling team (who celebrated, according to the Information's Erin Woo, by setting off a false fire alarm at OpenAI HQ). Twitch co-founder Emmett Shear is no longer interim CEO but is "deeply pleased by this result, after 72 very intense hours of work," and is "glad to have been a part of the solution."


The Latest on OpenAI Leaders' Stalled Efforts to Bring Back Sam Altman After He Was Fired

TIME - Tech

Efforts by a group of OpenAI executives and investors to reinstate Sam Altman to his role as chief executive officer reached an impasse over the makeup and role of the board, according to people familiar with the negotiations. Resolution could come quickly, though talks are fluid and ongoing. Altman, who was fired Friday, is open to returning but wants to see governance changes, including the removal of existing board members, said the people, who asked not to be identified because the negotiations are private. He's also seeking a statement absolving him of wrongdoing, they said. After facing intense outrage over the ouster, the board initially agreed in principle to step down but has so far refused to officially do so.


AI will eventually need an international authority, OpenAI leaders say

FOX News

Sam Altman, the CEO of artificial intelligence lab OpenAI, told a Senate panel he welcomes federal regulation on the technology "to mitigate" its risks. The artificial intelligence field needs an international watchdog to regulate future superintelligence, according to the founder of OpenAI. In a blog post from CEO Sam Altman and company leaders Greg Brockman and Ilya Sutskever, the group said – given potential existential risk – the world "can't just be reactive," comparing the tech to nuclear energy. To that end, they suggested coordination among leading development efforts, highlighting that there are "many ways this could be implemented," including a project set up by major governments or curbs on annual growth rates. "Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," they asserted.


AI could grow so powerful it replaces experienced professionals within 10 years, Sam Altman warns

FOX News

OpenAI CEO Sam Altman took questions from reporters after his congressional hearing, including defining "scary AI." Artificial intelligence could become so powerful that it replaces professional experts "in most domains" within the next decade, OpenAI CEO Sam Altman warned. Altman, the chief of the AI lab behind popular platforms such as ChatGPT, published a blog post this week with two other OpenAI leaders, Greg Brockman and Ilya Sutskever, warning that "we must mitigate the risks of today's AI technology." "It's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," reads the post, which was published on OpenAI's website. "In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there," the post continued.

(Photo caption: Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., on May 16, 2023.)

Altman and his fellow OpenAI executives compared artificial intelligence to nuclear energy and synthetic biology, arguing that regulation must be handled with "special treatment and coordination" to be effective. They suggested that a version of the International Atomic Energy Agency will be needed to regulate the "superintelligence" technology. "Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc," they wrote.

Altman appeared before Congress this month to discuss how to regulate artificial intelligence, saying he welcomes U.S. leaders to craft such rules. Following the hearing, Altman gave Fox News Digital examples of "scary AI," including systems that could design "novel biological pathogens." "An AI that could hack into computer systems," he said. "I think these are all scary."